Proteins are a vital component of human life, and their structures are important for analyzing their function and mechanisms. Recent work has demonstrated the potential of AI-driven methods for protein structure prediction. However, the development of new models is limited by the available datasets and benchmark training procedures. To the best of our knowledge, existing open-source datasets are far from sufficient to meet the needs of modern protein-sequence-related research. To address this problem, we present PSP, the first million-scale protein structure prediction dataset with high coverage and diversity. The dataset consists of 570K true-structure sequences (10TB) and 745K complementary distilled sequences (15TB). We additionally provide a benchmark training procedure for a SOTA protein structure prediction model on this dataset. We validate the utility of the dataset for training by participating in the CAMEO contest, in which our model won first place. We hope that our PSP dataset, together with the training benchmark, can enable a broader community of AI/biology researchers to pursue AI-driven protein-related research.
translated by Google Translate
Latent space energy-based models (EBMs), also known as energy-based priors, have drawn growing interest in generative modeling. Fueled by their flexibility of formulation and strong modeling power in the latent space, recent works built upon them have made interesting attempts at interpretable text modeling. However, latent space EBMs also inherit some flaws from EBMs in data space: degenerate MCMC sampling quality in practice can lead to poor generation quality and instability in training, especially on data with complex latent structures. Inspired by recent efforts that leverage diffusion recovery likelihood learning as a cure for the sampling issue, we introduce a novel symbiosis between the diffusion model and latent space EBMs within a variational learning framework, coined the latent diffusion energy-based model. We jointly develop a geometric clustering-based regularization with the information bottleneck to further improve the quality of the learned latent space. Experiments on several challenging tasks demonstrate the superior performance of our model on interpretable text modeling over strong counterparts.
Humans communicate with graphical sketches in addition to symbolic languages. While recent studies of emergent communication focus primarily on symbolic languages, their settings overlook the graphical sketches present in human communication; they do not consider the evolution process through which symbolic sign systems emerge from the trade-off between iconicity and symbolicity. In this work, we take the first step toward modeling and simulating this evolution process via two neural agents playing a visual communication game: the sender communicates with the receiver by sketching on a canvas. We devise a novel reinforcement learning method such that the agents evolve jointly toward successful communication and abstract graphical conventions. To examine the emerged conventions, we carefully define three key properties -- iconicity, symbolicity, and semanticity -- and design corresponding evaluation methods. Our experimental results under different controls are consistent with observations from studies of human graphical conventions. Notably, we find that evolved sketches can preserve a continuum of semantics under proper environmental pressure. More interestingly, co-evolved agents can switch between conventionalized and iconic communication based on their familiarity with the referents. We hope this study can pave the way for research on emergent communication in the under-explored modality of sketches.
Is intelligence realized by connectionism or classicism? While connectionist approaches have achieved superhuman performance, there is growing evidence that such task-specific superiority is particularly fragile under systematic generalization. This observation points to the central debate between connectionism and classicism, in which the latter continually advocates an algebraic treatment in cognitive architectures. In this work, we follow the classicist's call and propose a hybrid approach to improve the generalization of reasoning systems. Specifically, we showcase a prototype with algebraic representation for the abstract spatial-temporal reasoning task of Raven's Progressive Matrices (RPM) and present the ALgebra-Aware Neuro-Semi-Symbolic (ALANS) learner. The ALANS learner is motivated by abstract algebra and representation theory. It consists of a neural visual perception frontend and an algebraic abstract reasoning backend: the frontend summarizes visual information into object-based representations, while the backend transforms them into an algebraic structure and induces the hidden operators on the fly. The induced operators are then executed to predict the representation of the answer, and the choice most similar to the prediction is selected as the solution. Extensive experiments show that by incorporating algebraic treatment, the ALANS learner outperforms various pure connectionist models in domains requiring systematic generalization. We further show that the learned algebraic representations can be decoded via isomorphism to generate answers.
We present Deep Region Competition (DRC), an algorithm designed to extract foreground objects from images in a fully unsupervised manner. Foreground extraction can be viewed as a special case of generic image segmentation that focuses on identifying and disentangling objects from the background. In this work, we rethink foreground extraction through generative image modeling in the form of a Mixture of Experts (MoE), where we further introduce learned pixel re-assignment as an essential inductive bias for capturing the regularities of background regions. With this modeling, the foreground-background partition can be naturally discovered through Expectation-Maximization (EM). We show that the method effectively exploits the interaction between the mixture components during the partitioning process, which connects closely to region competition, a seminal approach to generic image segmentation. Experiments show that, compared with existing methods, DRC exhibits more competitive performance on complex real-world data and challenging multi-object scenes. Moreover, we argue that DRC can potentially generalize to novel foreground objects, even for categories unseen during training.
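The EM-based foreground-background partition can be illustrated with a toy example. The sketch below runs EM on a 1-D two-component Gaussian mixture, a deliberately simplified stand-in for DRC's mixture of generative experts (the real model uses learned networks and pixel re-assignment, which are omitted here); the function name is hypothetical.

```python
import math


def em_two_gaussians(data, iters=50):
    """Minimal EM for a two-component 1-D Gaussian mixture: a toy analogue
    of the foreground/background partition in DRC (not the paper's model).
    Returns (means, stds, mixture weights, responsibilities of component 0)."""
    mu = [min(data), max(data)]  # crude initialization at the data extremes
    sd = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: soft responsibility of each component for each point
        r = []
        for x in data:
            p = [pi[k] / (sd[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) for k in range(2)]
            s = p[0] + p[1]
            r.append([p[0] / s, p[1] / s])
        # M-step: re-estimate parameters from the soft assignments
        for k in range(2):
            nk = sum(ri[k] for ri in r)
            mu[k] = sum(ri[k] * x for ri, x in zip(r, data)) / nk
            var = sum(ri[k] * (x - mu[k]) ** 2 for ri, x in zip(r, data)) / nk
            sd[k] = max(math.sqrt(var), 1e-3)
            pi[k] = nk / len(data)
    return mu, sd, pi, [ri[0] for ri in r]
```

In DRC, the analogous E-step assigns pixels to region experts and the M-step updates the experts, so the partition emerges from the competition between mixture components rather than from any segmentation supervision.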
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data is still tiny. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments to objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate the research of automatic MER, and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
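As one illustration of handling class imbalance in a dataset like this, the sketch below computes inverse-frequency sample weights, a common generic remedy; it is not the specific solution explored in the DFME paper, and the function name is hypothetical.

```python
from collections import Counter


def inverse_frequency_weights(labels):
    """Assign each sample a weight inversely proportional to the frequency
    of its class, so that rare emotion classes contribute as much to the
    loss (or to a weighted sampler) as frequent ones.

    A class with the average frequency gets weight 1.0."""
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    # weight of class c: total / (n_classes * count_c)
    class_w = {c: total / (n_classes * k) for c, k in counts.items()}
    return [class_w[y] for y in labels]
```

Such per-sample weights can be plugged into a weighted loss or a weighted random sampler so that minority classes (e.g. "fear") are not drowned out by majority classes during training.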
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
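The GRN layer described above performs three steps: global feature aggregation, feature normalization, and feature calibration. Below is a minimal dependency-free sketch operating on a single feature map stored as plain Python lists; the real layer is a trainable network module, and `gamma`/`beta` stand in for its learnable affine parameters.

```python
import math


def grn(x, gamma=1.0, beta=0.0, eps=1e-6):
    """Global Response Normalization over one feature map.

    x: list of channels, each channel a flat list of spatial values
       (shape [C][H*W]). Returns the same shape.
      1) global aggregation: per-channel L2 norm over spatial positions,
      2) divisive normalization: each norm divided by the mean norm,
      3) calibration: scale the input by its normalized norm, then apply
         the affine (gamma, beta) and a residual connection.
    """
    # 1) aggregate: L2 norm of each channel
    g = [math.sqrt(sum(v * v for v in ch)) for ch in x]
    # 2) normalize: divide by the mean norm across channels
    mean_g = sum(g) / len(g)
    n = [gi / (mean_g + eps) for gi in g]
    # 3) calibrate with residual: y = gamma * (x * n) + beta + x
    return [[gamma * (v * ni) + beta + v for v in ch]
            for ch, ni in zip(x, n)]
```

Because channels with above-average response are amplified relative to the rest, the layer encourages the inter-channel feature competition the abstract refers to.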
A step-search sequential quadratic programming method is proposed for solving nonlinear equality constrained stochastic optimization problems. It is assumed that constraint function values and derivatives are available, but only stochastic approximations of the objective function and its associated derivatives can be computed via inexact probabilistic zeroth- and first-order oracles. Under reasonable assumptions, a high-probability bound on the iteration complexity of the algorithm to approximate first-order stationarity is derived. Numerical results on standard nonlinear optimization test problems illustrate the advantages and limitations of our proposed method.
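To make the SQP machinery concrete, the following toy sketch runs Newton-KKT steps with a noisy gradient oracle on a tiny equality-constrained problem. It omits the paper's step-search mechanism and probabilistic oracle analysis entirely, and all names are hypothetical.

```python
import random


def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]
    b = b[:]
    n = 3
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][c] * x[c] for c in range(i + 1, n))) / A[i][i]
    return x


def stochastic_sqp(x, steps=200, noise=0.05, alpha=0.5, seed=0):
    """Toy equality-constrained stochastic SQP loop on
        min x1^2 + x2^2  s.t.  x1 + x2 = 1.
    Each iteration solves the Newton-KKT system
        [H  A^T] [d]   [-g]
        [A   0 ] [l] = [-c]
    with a noisy gradient g (the stochastic first-order oracle) and an
    exact constraint value c, then takes a damped step x <- x + alpha*d."""
    rng = random.Random(seed)
    for _ in range(steps):
        g = [2 * x[0] + rng.gauss(0, noise),  # stochastic gradient oracle
             2 * x[1] + rng.gauss(0, noise)]
        c = x[0] + x[1] - 1.0                 # constraint value (exact)
        kkt = [[2.0, 0.0, 1.0],               # H = 2I, A = [1, 1]
               [0.0, 2.0, 1.0],
               [1.0, 1.0, 0.0]]
        d1, d2, _lam = solve3(kkt, [-g[0], -g[1], -c])
        x = [x[0] + alpha * d1, x[1] + alpha * d2]
    return x
```

Note that gradient noise only perturbs the step within the constraint's tangent space: the constraint row of the KKT system is exact, so feasibility is restored geometrically even with a stochastic objective oracle.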
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL) yet been criticized for learning inefficiency. We believe the insufficient utilization of training signals is responsible. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch with the disjoint regulation to raise the usage of tokens for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture to respectively predict invisible (masked) and visible (unmasked) tokens with superior learning targets. Rooted in orthogonal perspectives on training efficiency improvement, DM and JD cooperatively accelerate training convergence without sacrificing the model's generalization ability. Concretely, DM can train ViT with half of the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves the linear probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks like semantic segmentation, object detection, etc., our DMJD also presents superior generalization compared with state-of-the-art SSL methods. The code and model will be made public at https://github.com/mx-mark/DMJD.
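The disjoint masking (DM) strategy can be sketched as sampling several pairwise-disjoint masked views of the same token sequence, so that across views, more of the image's tokens serve as reconstruction targets while each single view keeps the usual masking rate. A minimal illustration, assuming the views fit disjointly; the function name is hypothetical.

```python
import random


def disjoint_masks(num_tokens, mask_ratio, num_views, seed=0):
    """Sample `num_views` masked views of one image whose masked token sets
    are pairwise disjoint (the "disjoint regulation"), each view masking
    the same fraction `mask_ratio` of the `num_tokens` patch tokens."""
    tokens_per_view = int(num_tokens * mask_ratio)
    assert num_views * tokens_per_view <= num_tokens, "views must fit disjointly"
    rng = random.Random(seed)
    order = list(range(num_tokens))
    rng.shuffle(order)
    # carve consecutive chunks of the shuffled order into disjoint masks
    return [set(order[i * tokens_per_view:(i + 1) * tokens_per_view])
            for i in range(num_views)]
```

For a 14x14 ViT patch grid (196 tokens) with a 25% mask ratio, four views together cover 196 distinct masked tokens, so every token of the image is reconstructed exactly once per batch pass.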
Considering the computation complexity, we propose a Guided Hybrid Quantization with One-to-one Self-Teaching (GHOST) framework. More concretely, we first design a structure called guided quantization self-distillation (GQSD), an innovative idea for realizing lightweight models through the synergy of quantization and distillation. The training process of the quantization model is guided by its full-precision counterpart, which saves time and cost without requiring a huge pre-trained model in advance. Second, we put forward a hybrid quantization (HQ) module to obtain the optimal bit width automatically under a constrained condition, where a threshold for the distribution distance between the center and samples is applied in the weight value search space. Third, in order to improve information transformation, we propose a one-to-one self-teaching (OST) module to give the student network an ability of self-judgment. A switch control machine (SCM) builds a bridge between the student network and teacher network in the same location to help the teacher reduce wrong guidance and impart vital knowledge to the student. This distillation method allows a model to learn from itself and gain substantial improvement without any additional supervision. Extensive experiments on a multimodal dataset (VEDAI) and single-modality datasets (DOTA, NWPU, and DIOR) show that object detection based on GHOST outperforms the existing detectors. Its tiny parameter count (<9.7 MB) and Bit-Operations (<2158 G BOPs), compared with any remote-sensing, lightweight, or distillation-based algorithms, demonstrate its superiority in the lightweight design domain. Our code and model will be released at https://github.com/icey-zhang/GHOST.
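The idea of searching for a bit width under a constrained condition can be illustrated with a toy sketch: uniform symmetric quantization plus a search for the smallest candidate bit width whose error stays under a threshold. Note that the HQ module's actual criterion is a distribution distance between the center and samples in the weight search space, not the mean absolute error used below; all names are hypothetical.

```python
def quantize(weights, bits):
    """Uniform symmetric quantization of a list of weights to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) * scale for w in weights]


def choose_bit_width(weights, threshold, candidates=(2, 4, 8)):
    """Pick the smallest bit width whose quantization error (mean absolute
    distance between original and quantized weights) stays under `threshold`;
    fall back to the widest candidate if none qualifies."""
    for bits in candidates:
        q = quantize(weights, bits)
        err = sum(abs(a - b) for a, b in zip(weights, q)) / len(weights)
        if err <= threshold:
            return bits
    return candidates[-1]
```

Run per layer, such a search yields a mixed-precision assignment: layers whose weight distributions tolerate coarse quantization get few bits, while sensitive layers keep more.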